Successive Refinement with Decoder Cooperation and its Channel Coding Duals
We study cooperation in multi-terminal source coding models involving
successive refinement. Specifically, we study the case of a single encoder and
two decoders, where the encoder provides a common description to both the
decoders and a private description to only one of the decoders. The decoders
cooperate via cribbing, i.e., the decoder with access only to the common
description is allowed to observe, in addition, a deterministic function of the
reconstruction symbols produced by the other. We characterize the fundamental
performance limits in the respective settings of non-causal, strictly-causal
and causal cribbing. We use a new coding scheme, referred to as Forward
Encoding and Block Markov Decoding, which is a variant of one recently used by
Cuff and Zhao for coordination via implicit communication. Finally, we use the
insight gained to introduce and solve some dual channel coding scenarios
involving Multiple Access Channels with cribbing.
Comment: 55 pages, 15 figures, 8 tables, submitted to IEEE Transactions on
Information Theory. A shorter version submitted to ISIT 201
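The single-encoder, two-decoder cribbing setup can be sketched with a toy two-stage scalar quantizer. Everything below (the quantizer, the particular cribbing function, the bit allocations) is an illustrative stand-in, not the paper's Forward Encoding and Block Markov Decoding scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy source: uniform samples on [0, 1).
x = rng.random(8)

# Encoder: a coarse "common" description (1 bit) sent to both decoders,
# and a "private" refinement bit sent only to decoder 1.
common = (x >= 0.5).astype(int)            # common description
private = ((x % 0.5) >= 0.25).astype(int)  # refinement, decoder 1 only

# Decoder 1 reconstructs using both descriptions (2-bit quantizer).
xhat1 = 0.5 * common + 0.25 * private + 0.125

# Cribbing: decoder 2 sees the common description plus a deterministic
# function of decoder 1's reconstruction symbols.
crib = ((xhat1 % 0.5) >= 0.25).astype(int)  # deterministic function of xhat1
xhat2 = 0.5 * common + 0.25 * crib + 0.125

# With this particular cribbing function, decoder 2 recovers decoder 1's
# refinement bit exactly, so both reconstructions coincide.
print(np.allclose(xhat1, xhat2))  # → True
```

Here the cribbing function happens to expose decoder 1's refinement bit exactly; in the model of the abstract, the observed quantity is an arbitrary deterministic function of the reconstruction symbols, and the interesting regimes are those where it is lossy.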
Capacity of a POST Channel with and without Feedback
We consider finite state channels where the state of the channel is its
previous output. We refer to these as POST (Previous Output is the STate)
channels. We first focus on POST(α) channels. These channels have binary
inputs and outputs, where the state determines if the channel behaves as a
Z channel or an S channel, both with parameter α. We
show that the non-feedback capacity of the POST(α) channel equals its
feedback capacity, despite the memory of the channel. The proof of this
surprising result is based on showing that the induced output distribution,
when maximizing the directed information in the presence of feedback, can also
be achieved by an input distribution that does not utilize the feedback. We
show that this is a sufficient condition for the feedback capacity to equal the
non-feedback capacity for any finite state channel. We show that the result
carries over from the POST(α) channel to a binary POST channel where the
previous output determines whether the current channel will be binary with
parameters or . Finally, we show that, in general, feedback may
increase the capacity of a POST channel.
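The POST model itself is easy to simulate. The channel parameters are garbled in this copy of the abstract, so the sketch below assumes the common POST(α) convention: previous output 0 makes the channel a Z channel and previous output 1 an S channel, both with crossover parameter α. That convention and the initial state y0 = 0 are assumptions, not taken from the text:

```python
import numpy as np

def post_channel(x, alpha, rng, y0=0):
    """Simulate a POST(alpha) channel: the state is the previous output.

    Assumed convention: previous output 0 selects a Z channel
    (input 1 flips to 0 w.p. alpha), previous output 1 selects an
    S channel (input 0 flips to 1 w.p. alpha).
    """
    y_prev, out = y0, []
    for xi in x:
        if y_prev == 0:   # Z channel: 0 passes clean, 1 may flip down
            yi = 0 if xi == 0 else (0 if rng.random() < alpha else 1)
        else:             # S channel: 1 passes clean, 0 may flip up
            yi = 1 if xi == 1 else (1 if rng.random() < alpha else 0)
        out.append(yi)
        y_prev = yi
    return np.array(out)

rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=10_000)
y = post_channel(x, alpha=0.3, rng=rng)

# Sanity check: with alpha = 0 the channel is noiseless, so y equals x.
assert np.array_equal(post_channel(x, 0.0, rng), x)
```

Because each output depends on the previous output, the channel has memory; the abstract's point is that, for POST(α), this memory nonetheless gives feedback no capacity advantage.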
ClusterGAN: Latent Space Clustering in Generative Adversarial Networks
Generative Adversarial Networks (GANs) have obtained remarkable success in
many unsupervised learning tasks and unarguably, clustering is an important
unsupervised learning problem. While one can potentially exploit the
latent-space back-projection in GANs to cluster, we demonstrate that the
cluster structure is not retained in the GAN latent space.
In this paper, we propose ClusterGAN as a new mechanism for clustering using
GANs. By sampling latent variables from a mixture of one-hot encoded variables
and continuous latent variables, coupled with an inverse network (which
projects the data to the latent space) trained jointly with a clustering
specific loss, we are able to achieve clustering in the latent space. Our
results show a remarkable phenomenon that GANs can preserve latent space
interpolation across categories, even though the discriminator is never exposed
to such vectors. We compare our results with various clustering baselines and
demonstrate superior performance on both synthetic and real datasets.
Comment: GANs, Clustering, Latent Space, Interpolation (v2: Typos fixed, some
new experiments added, reported metrics on best validated model.)
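The latent sampling the abstract describes, a mixture of one-hot encoded and continuous latent variables, can be sketched directly. The dimensions, the Gaussian scale sigma, and the function name below are illustrative choices, not the paper's settings:

```python
import numpy as np

def sample_latent(batch, n_clusters, dim_n, sigma=0.1, rng=None):
    """ClusterGAN-style latent sampling (sizes/sigma are illustrative):
    concatenate a continuous Gaussian part with a one-hot cluster code."""
    if rng is None:
        rng = np.random.default_rng()
    z_n = sigma * rng.standard_normal((batch, dim_n))  # continuous part
    ids = rng.integers(0, n_clusters, size=batch)      # cluster choices
    z_c = np.eye(n_clusters)[ids]                      # one-hot part
    return np.concatenate([z_n, z_c], axis=1), ids

z, ids = sample_latent(batch=4, n_clusters=10, dim_n=30)
print(z.shape)  # → (4, 40)
```

The one-hot block makes each sampled z carry an explicit cluster id, which is what the jointly trained inverse network and clustering-specific loss can exploit to keep cluster structure in the latent space.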